59 research outputs found
GIS SOFTWARE DEVELOPMENT: SUMMER INTERNSHIP WITH CLARK LABS
This paper describes my personal internship experience during the summer of 2014. I worked as a part-time student research assistant at Clark Labs and focused on the development of new modules for IDRISI GIS software. I created the new LANDSAT module for importing and preprocessing Landsat Archive imagery. I also created an option in the TassCap module for performing the Landsat 8 Tasseled Cap transformation. Through collaboration with GIS and remote sensing professionals at Clark Labs, I successfully applied my geospatial knowledge to real-world software development work. This experience also sharpened the skills I learned at Clark University and was directly related to my career goals. I highly recommend this internship to future GISDE students who wish to apply their knowledge to programming GIS software that facilitates geographers’ understanding of the world.
Improving the Generalizability of Trajectory Prediction Models with Frenet-Based Domain Normalization
Predicting the future trajectories of nearby objects plays a pivotal role in
robotics and automation applications such as autonomous driving. While learning-based
trajectory prediction methods have achieved remarkable performance on public
benchmarks, the generalization ability of these approaches remains
questionable. The poor generalizability on unseen domains, a well-recognized
defect of data-driven approaches, can potentially harm the real-world
performance of trajectory prediction models. We are thus motivated to improve
the generalization ability of models instead of merely pursuing high accuracy on
average. Due to the lack of benchmarks for quantifying the generalization
ability of trajectory predictors, we first construct a new benchmark called
argoverse-shift, where the data distributions of domains are significantly
different. Using this benchmark for evaluation, we identify that the domain
shift problem seriously hinders the generalization of trajectory predictors
since state-of-the-art approaches suffer from severe performance degradation
when facing those out-of-distribution scenes. To enhance the robustness of
models against the domain shift problem, we propose a plug-and-play strategy for
domain normalization in trajectory prediction. Our strategy utilizes the Frenet
coordinate frame for modeling and can effectively narrow the domain gap of
different scenes caused by the variety of road geometry and topology.
Experiments show that our strategy noticeably boosts the prediction performance
of the state-of-the-art in domains that were previously unseen to the models,
thereby improving the generalization ability of data-driven trajectory
prediction methods.

Comment: This paper was accepted by the 2023 IEEE International Conference on Robotics and Automation (ICRA).
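As a rough illustration of the idea behind Frenet-based normalization (not the paper's implementation), a Cartesian point can be projected onto a scene's reference path to obtain an arc-length coordinate s and a signed lateral offset d; the function name, polyline path representation, and sign convention below are assumptions:

```python
import numpy as np

def cartesian_to_frenet(point, ref_path):
    """Project a 2-D point onto a polyline reference path and return
    (s, d): arc length along the path and signed lateral offset.
    Hypothetical helper for illustration; assumes no zero-length segments."""
    ref_path = np.asarray(ref_path, dtype=float)
    segs = np.diff(ref_path, axis=0)                  # segment vectors
    seg_len = np.linalg.norm(segs, axis=1)
    cum_s = np.concatenate(([0.0], np.cumsum(seg_len)))
    best = (np.inf, 0.0, 0.0)                         # (squared dist, s, d)
    for i, (p0, v, length) in enumerate(zip(ref_path[:-1], segs, seg_len)):
        # clamp the projection parameter to stay on this segment
        t = np.clip(np.dot(point - p0, v) / (length * length), 0.0, 1.0)
        diff = point - (p0 + t * v)                   # offset from projection
        d2 = float(diff @ diff)
        if d2 < best[0]:
            # 2-D cross product gives the side: left of the path is positive
            sign = np.sign(v[0] * diff[1] - v[1] * diff[0])
            best = (d2, cum_s[i] + t * length, sign * np.sqrt(d2))
    return best[1], best[2]
```

Expressing every scene's trajectories in such path-relative (s, d) coordinates removes much of the variation due to road geometry and topology, which is the intuition behind narrowing the domain gap.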
FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification
Deep learning is becoming increasingly ubiquitous in medical research and
applications while involving sensitive information and even critical diagnosis
decisions. Researchers have observed a significant performance disparity among
subgroups with different demographic attributes, known as model unfairness, and
have put substantial effort into carefully designed architectures to address it;
such designs impose a heavy training burden, generalize poorly, and expose the
trade-off between model performance and fairness. To tackle these issues, we
propose FairAdaBN, which makes batch normalization adaptive to the sensitive
attribute. This simple but effective design
can be added to several classification backbones that are originally unaware
of fairness. Additionally, we derive a novel loss function that restrains
statistical parity between subgroups on mini-batches, encouraging the model to
converge with considerable fairness. In order to evaluate the trade-off between
model performance and fairness, we propose a new metric, named
Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness
improvement over accuracy drop. Experiments on two dermatological datasets show
that our proposed method outperforms other methods on fairness criteria and
FATE.

Comment: Accepted by MICCAI 202
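A minimal NumPy sketch of the core idea of attribute-adaptive batch normalization, keeping normalization statistics shared across the batch but duplicating the affine parameters per sensitive-attribute subgroup; this is an illustrative toy, not the paper's FairAdaBN module, and the class and argument names are assumptions:

```python
import numpy as np

class AdaptiveBatchNorm:
    """Batch normalization with separate affine parameters (gamma, beta)
    per sensitive-attribute subgroup. Inference-style sketch that uses
    current-batch statistics shared across all groups."""
    def __init__(self, num_features, num_groups, eps=1e-5):
        self.gamma = np.ones((num_groups, num_features))
        self.beta = np.zeros((num_groups, num_features))
        self.eps = eps

    def __call__(self, x, group):
        # x: (batch, features); group: (batch,) integer subgroup ids
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        # integer-array indexing selects each sample's subgroup parameters
        return self.gamma[group] * x_hat + self.beta[group]
```

Only the affine parameters are duplicated per subgroup, so the extra cost over standard batch normalization is small, consistent with the abstract's claim of a simple design that fairness-unaware backbones can adopt.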
Precious but convenient means of prevention and treatment: physiological molecular mechanisms of interaction between exercise and motor factors and Alzheimer’s disease
Disproportionate to the severity of Alzheimer’s disease (AD) and the huge number of patients, exact treatment and prevention of AD are still being explored. With population ageing, the search for means to prevent and treat AD has become a high priority. In this search, exercise has been suggested as one of the more effective and less costly means of preventing and treating AD, and a large part of current research therefore aims to explore the effectiveness of exercise in AD prevention and treatment. However, because the specific pathogenesis of AD is complex, there are multiple hypotheses and potential mechanisms for exercise interventions in AD that need to be explored. This review therefore summarises the hypothesised interactions between exercise and AD from a molecular perspective, based on the available evidence from animal models or human experiments, and organises them according to the pathologies associated with AD: exercise can activate a number of signalling pathways inhibited by AD (e.g., the Wnt and PI3K/Akt signalling pathways) and reactivate the effects of downstream factors regulated by these pathways, thus acting to alleviate autophagic dysfunction, relieve neuroinflammation and mitigate Aβ deposition. In addition, this paper introduces a new approach to regulating the blood-brain barrier, i.e., restoring its stability, reducing abnormal phosphorylation of tau proteins and reducing neuronal apoptosis. Finally, this paper introduces a new concept, “motor factors” or “exerkines”, which act on AD through autocrine, paracrine or endocrine stimulation in response to movement.
In this process, we believe there is great potential for research in three areas: (1) the alleviation of AD through movement via the brain-gut axis; (2) the prevention and treatment of AD by movement combined with polyphenols; and (3) the continued exploration of movement-mediated activation of the Wnt signalling pathway in AD.
A Survey of Large Language Models
Language is essentially a complex, intricate system of human expressions
governed by grammatical rules. Developing capable AI algorithms for
comprehending and mastering a language poses a significant challenge. As a major
approach, language modeling has been widely studied for language understanding
and generation in the past two decades, evolving from statistical language
models to neural language models. Recently, pre-trained language models (PLMs)
have been proposed by pre-training Transformer models over large-scale corpora,
showing strong capabilities in solving various NLP tasks. Since researchers
have found that model scaling can lead to performance improvement, they have
further studied the scaling effect by training models at even larger sizes.
Interestingly, when the parameter scale exceeds a certain level, these enlarged
language models not only achieve a significant performance improvement but also
show some special abilities that are not present in small-scale language
models. To mark this difference in parameter scale, the research
community has coined the term large language models (LLMs) for PLMs of
significant size. Recently, research on LLMs has been advanced rapidly by
both academia and industry, and a remarkable milestone is the launch of ChatGPT,
which has attracted widespread attention from society. The technical evolution
of LLMs has been making an important impact on the entire AI community and
may revolutionize the way we develop and use AI algorithms. In this
survey, we review the recent advances of LLMs by introducing the background,
key findings, and mainstream techniques. In particular, we focus on four major
aspects of LLMs, namely pre-training, adaptation tuning, utilization, and
capacity evaluation. Besides, we also summarize the available resources for
developing LLMs and discuss the remaining issues for future directions.

Comment: ongoing work; 51 pages
Real-time Monitoring for the Next Core-Collapse Supernova in JUNO
A core-collapse supernova (CCSN) is one of the most energetic astrophysical
events in the Universe. The early and prompt detection of neutrinos before
(pre-SN) and during the SN burst is a unique opportunity to realize the
multi-messenger observation of the CCSN events. In this work, we describe the
monitoring concept and present the sensitivity of the system to the pre-SN and
SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), which is
a 20 kton liquid scintillator detector under construction in South China. The
real-time monitoring system is designed with both the prompt monitors on the
electronic board and online monitors at the data acquisition stage, in order to
ensure both the alert speed and alert coverage of progenitor stars. By assuming
a false alert rate of 1 per year, this monitoring system can be sensitive to
the pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and SN neutrinos
up to about 370 (360) kpc for a progenitor mass of 30 solar masses for the case
of normal (inverted) mass ordering. The pointing ability of the CCSN is
evaluated by using the accumulated event anisotropy of the inverse beta decay
interactions from pre-SN or SN neutrinos, which, along with the early alert,
can play important roles for the followup multi-messenger observations of the
next Galactic or nearby extragalactic CCSN.

Comment: 24 pages, 9 figures
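To illustrate in general terms how a false-alert budget constrains a counting trigger (a hypothetical toy, not JUNO's actual trigger logic), one can choose the smallest event-count threshold whose Poisson background tail probability, multiplied by the number of monitoring windows per year, stays below one false alert per year:

```python
import math

def alert_threshold(bkg_rate_per_window, windows_per_year,
                    false_alerts_per_year=1.0):
    """Return the smallest count threshold n such that
    P(N >= n) * windows_per_year <= false_alerts_per_year,
    for N ~ Poisson(bkg_rate_per_window). Illustrative only."""
    target = false_alerts_per_year / windows_per_year
    term = math.exp(-bkg_rate_per_window)   # P(N = 0)
    cdf = 0.0                               # running P(N <= n - 1)
    n = 0
    while True:
        if 1.0 - cdf <= target:             # tail P(N >= n) within budget
            return n
        cdf += term
        n += 1
        term *= bkg_rate_per_window / n     # next term P(N = n)
```

For example, with a mean background of one event per window and 1000 windows per year, the trigger must require at least six coincident events; lowering the false-alert budget pushes the threshold up and hence reduces the distance reach.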
Applications of flexible electronics related to cardiocerebral vascular system
Ensuring accessible and high-quality healthcare worldwide requires field-deployable and affordable clinical diagnostic tools with high performance. In recent years, flexible electronics with wearable and implantable capabilities have garnered significant attention from researchers, serving as vital diagnostic-assistance tools through real-time signal transmission from targets of interest in vivo. As the most crucial and complex system of the human body, the cardiocerebral vascular system, together with the heart-brain network, has attracted intense research effort into proper flexible-electronics design and materials selection, aiming to bridge the gulf between living organisms and rigid inorganic devices. This article reviews recent breakthroughs in flexible electronics applied specifically to the cardiocerebral vascular system and the heart-brain network. Relevant sensor types and working principles, electronics materials selection and treatment methods are expounded, and applications of flexible electronics related to these organs and systems are specially highlighted. Drawing on these studies, we summarise their merits and point out some limitations in this emerging field, which will help pave the way for the development of revolutionary flexible electronics and diagnosis-assistance tools.
Suppressing the Spikes in Electroencephalogram via an Iterative Joint Singular Spectrum Analysis and Low-Rank Decomposition Approach
The novelty and contribution of this paper consist of applying an iterative joint singular spectrum analysis and low-rank decomposition approach for suppressing the spikes in an electroencephalogram. First, the electroencephalogram is filtered by an ideal lowpass filter, removing the discrete Fourier transform coefficients outside the δ, θ, α, β and γ wave bands. Second, singular spectrum analysis is performed on the filtered electroencephalogram to obtain the singular spectrum analysis components, which are sorted according to the magnitudes of their corresponding eigenvalues. The components are then added together sequentially, starting from the last one; if the variance of the summed component under unit energy normalization exceeds a threshold value, the summation is terminated. The summed component forms the first scale of the electroencephalogram, and the remaining components are summed separately to form the residue of the electroencephalogram. Next, the low-rank decomposition is performed on the residue to obtain both a low-rank component and a sparse component. The low-rank component is added to the previous scale of the electroencephalogram to obtain the next scale. Finally, the above procedure is repeated on the sparse component until the variance of the current scale of the electroencephalogram under unit energy normalization exceeds another threshold value. Computer numerical simulation results show that our proposed method outperforms state-of-the-art methods in spike suppression.
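The singular spectrum analysis step used by such an approach can be sketched as follows; this is generic SSA (Hankel embedding, SVD, diagonal averaging), with the window length and function name chosen for illustration rather than taken from the paper:

```python
import numpy as np

def ssa_components(x, window):
    """Basic singular spectrum analysis: embed the signal into a Hankel
    (trajectory) matrix, take its SVD, and reconstruct one additive
    component per singular value by diagonal averaging. Components come
    back sorted by singular value, largest first, and sum back to x."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for r in range(len(s)):
        elem = s[r] * np.outer(u[:, r], vt[r])        # rank-1 piece of traj
        comp = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):                        # anti-diagonal i + j
                comp[i + j] += elem[i, j]
                counts[i + j] += 1
        comps.append(comp / counts)                   # diagonal averaging
    return comps
```

The paper's procedure would then sort these components by eigenvalue magnitude, accumulate them from the smallest upward until the variance criterion is met, and pass the remainder to the low-rank decomposition stage.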